Bayesian subset simulation
We consider the problem of estimating a probability of failure α,
defined as the volume of the excursion set of a function f above a given threshold, under a given
probability measure on the input space. In this article, we combine the popular
subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our
sequential Bayesian approach for the estimation of a probability of failure
(Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it
possible to estimate α when the number of evaluations of f is very
limited and α is very small. The resulting algorithm is called Bayesian
subset simulation (BSS). A key idea, as in the subset simulation algorithm, is
to estimate the probabilities of a sequence of excursion sets of f above
intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A
Gaussian process prior on f is used to define the sequence of densities
targeted by the SMC algorithm, and to drive the selection of evaluation points
of f for estimating the intermediate probabilities. Adaptive procedures are
proposed to determine the intermediate thresholds and the number of evaluations
to be carried out at each stage of the algorithm. Numerical experiments
illustrate that BSS achieves significant savings in the number of function
evaluations with respect to other Monte Carlo approaches.
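As a rough illustration of the intermediate-threshold idea (plain subset simulation only, not the Bayesian variant, which additionally uses a Gaussian process model of f), here is a minimal sketch; the limit-state function, thresholds and tuning constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2, axis=-1)   # toy limit-state function (assumption)
d, u = 2, 12.0                        # input dimension, failure threshold
n, p0 = 2000, 0.1                     # samples per stage, conditional level

# Stage 0: plain Monte Carlo under the standard normal input measure
x = rng.standard_normal((n, d))
y = f(x)
prob = 1.0
while True:
    t = np.quantile(y, 1 - p0)        # adaptive intermediate threshold
    if t >= u:                        # last stage reaches the true threshold
        prob *= np.mean(y > u)
        break
    prob *= p0
    seeds = x[y > t]                  # samples already in the excursion set
    idx = rng.integers(len(seeds), size=n)
    x, y = seeds[idx].copy(), f(seeds[idx])
    # Metropolis moves targeting the normal law restricted to {f > t}
    for _ in range(5):
        cand = x + 0.8 * rng.standard_normal((n, d))
        ratio = np.exp(0.5 * (np.sum(x**2, 1) - np.sum(cand**2, 1)))
        ycand = f(cand)
        move = (rng.random(n) < ratio) & (ycand > t)
        x[move], y[move] = cand[move], ycand[move]

print(f"estimated probability of failure: {prob:.2e}")
# exact value for this toy case: P(chi2_2 > 12) = exp(-6) ~ 2.5e-3
```

Each stage only has to estimate a conditional probability of order p0, which is why far fewer samples are needed than with crude Monte Carlo at the target level.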
Bayesian Subset Simulation: a kriging-based subset simulation algorithm for the estimation of small probabilities of failure
The estimation of small probabilities of failure from computer simulations is
a classical problem in engineering, and the Subset Simulation algorithm
proposed by Au & Beck (Prob. Eng. Mech., 2001) has become one of the most
popular methods to solve it. Subset simulation has been shown to provide
significant savings in the number of simulations needed to achieve a given
estimation accuracy, compared with many other Monte Carlo approaches. The
number of simulations remains quite high, however, and the method can be
impractical for applications where an expensive-to-evaluate computer model is
involved. We propose a new algorithm, called Bayesian Subset Simulation, that
takes the best from the Subset Simulation algorithm and from sequential
Bayesian methods based on kriging (also known as Gaussian process modeling).
The performance of this new algorithm is illustrated using a test case from the
literature. We are able to report promising results. In addition, we provide a
numerical study of the statistical properties of the estimator.

Comment: 11th International Probabilistic Safety Assessment and Management
Conference (PSAM11) and The Annual European Safety and Reliability Conference
(ESREL 2012), Helsinki, Finland (2012).
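The kriging ingredient can be sketched independently: a Gaussian process interpolates a handful of expensive evaluations and returns a predictive mean and variance everywhere else. The squared-exponential kernel, its length-scale and the test function below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Minimal kriging (Gaussian process interpolation) sketch; the
# squared-exponential kernel and its length-scale are assumptions.
def kernel(a, b, ell=0.4):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

xtr = np.array([0.0, 0.3, 0.6, 1.0])             # 4 expensive evaluations
ytr = np.sin(6 * xtr)                            # stand-in for the simulator
K = kernel(xtr, xtr) + 1e-10 * np.eye(len(xtr))  # jitter for stability
xs = np.linspace(0.0, 1.0, 101)
Ks = kernel(xs, xtr)
alpha = np.linalg.solve(K, ytr)
mean = Ks @ alpha                                # kriging mean prediction
var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
```

The predictor interpolates the data exactly, and the predictive variance indicates where a new simulation would be most informative, which is what sequential strategies like the one above exploit.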
A Bayesian approach to constrained single- and multi-objective optimization
This article addresses the problem of derivative-free (single- or
multi-objective) optimization subject to multiple inequality constraints. Both
the objective and constraint functions are assumed to be smooth, non-linear and
expensive to evaluate. As a consequence, the number of evaluations that can be
used to carry out the optimization is very limited, as in complex industrial
design optimization problems. The method we propose to overcome this difficulty
has its roots in both the Bayesian and the multi-objective optimization
literatures. More specifically, an extended domination rule is used to handle
objectives and constraints in a unified way, and a corresponding expected
hyper-volume improvement sampling criterion is proposed. This new criterion is
naturally adapted to the search for a feasible point when none is available, and
reduces to existing Bayesian sampling criteria---the classical Expected
Improvement (EI) criterion and some of its constrained/multi-objective
extensions---as soon as at least one feasible point is available. The
calculation and optimization of the criterion are performed using Sequential
Monte Carlo techniques. In particular, an algorithm similar to the subset
simulation method, which is well known in the field of structural reliability,
is used to estimate the criterion. The method, which we call BMOO (for Bayesian
Multi-Objective Optimization), is compared to state-of-the-art algorithms for
single- and multi-objective constrained optimization.
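For context, the classical Expected Improvement criterion that the BMOO criterion reduces to (in the unconstrained, single-objective case) has a well-known closed form; this standalone sketch uses a minimization convention:

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mean, sd, ymin):
    """Closed-form EI at a point with GP posterior mean `mean` and
    standard deviation `sd`, given the best observed value `ymin`
    (minimization convention)."""
    if sd <= 0.0:                       # no uncertainty: deterministic improvement
        return max(ymin - mean, 0.0)
    z = (ymin - mean) / sd
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal cdf
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal pdf
    return (ymin - mean) * Phi + sd * phi

print(expected_improvement(0.0, 1.0, 0.0))    # -> 1/sqrt(2*pi) ~ 0.3989
```

The criterion trades off exploitation (low posterior mean) against exploration (high posterior standard deviation), which is why a single maximization of EI selects a sensible next evaluation point.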
Estimating derivatives and integrals with Kriging
This paper formalizes a methodology based on Kriging, a technique developed by geostatisticians, for estimating derivatives and integrals of signals that are only known via possibly irregularly spaced and noisy observations. This finds direct applications, e.g., in system identification, when differential algebra is used to express parameters as nonlinear functions of the inputs, the outputs and their derivatives. The procedure is quite simple to implement, and allows confidence intervals on the predicted values to be derived.
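A minimal sketch of the idea, assuming a squared-exponential kernel and a fixed noise level (both illustrative): since the kriging mean is a linear combination of kernel functions, its derivative is available in closed form by differentiating the kernel.

```python
import numpy as np

# Estimate the derivative of a noisy signal by differentiating the
# kriging mean; kernel, length-scale and noise level are assumptions.
ell, noise = 0.3, 0.05
k  = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
dk = lambda a, b: -(a[:, None] - b[None, :]) / ell ** 2 * k(a, b)  # d k / d a

rng = np.random.default_rng(1)
xtr = np.sort(rng.uniform(0.0, 2.0, 60))       # irregularly spaced samples
ytr = np.sin(xtr) + noise * rng.standard_normal(60)
alpha = np.linalg.solve(k(xtr, xtr) + noise ** 2 * np.eye(60), ytr)

xs = np.array([1.0])
deriv = (dk(xs, xtr) @ alpha)[0]               # estimate of d/dx sin(x) at 1.0
```

Integrals can be treated the same way, by integrating the kernel instead of differentiating it; in both cases the noise term in the covariance matrix regularizes the otherwise ill-posed numerical differentiation.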
Black-box identification and simulation of continuous-time nonlinear systems by linear prediction of random processes
This article proposes linear-prediction methods for random processes for the black-box identification and simulation of continuous-time nonlinear dynamical systems. The proposed identification method uses noisy observations of the state vector at arbitrary instants. It consists of two distinct steps: first, the time derivatives of the state are estimated; second, the vector field governing the dynamics is approximated. For the simulation of the system, we propose a new numerical integration scheme that consistently accounts for the approximation error of the vector field.
Gaussian process modeling for stochastic multi-fidelity simulators, with application to fire safety
To assess the possibility of evacuating a building in case of a fire, a
standard method consists in simulating the propagation of the fire using
finite difference methods, taking into account the random behavior of the
fire, so that the result of a simulation is non-deterministic. The mesh fineness tunes
the quality of the numerical model, and its computational cost. Depending on
the mesh fineness, one simulation can last anywhere from a few minutes to
several weeks. In this article, we focus on predicting the behavior of the fire
simulator at fine meshes, using cheaper results, at coarser meshes. In the
literature of the design and analysis of computer experiments, such a problem
is referred to as multi-fidelity prediction. Our contribution is to extend to
the case of stochastic simulators the Bayesian multi-fidelity model proposed by
Picheny and Ginsbourger (2013) and Tuo et al. (2014).
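The deterministic autoregressive structure underlying such multi-fidelity models (fine = scale × coarse + discrepancy), which the paper extends to stochastic simulators with a Gaussian process treatment, can be conveyed with a least-squares toy example; both simulators below are illustrative stand-ins:

```python
import numpy as np

# Toy autoregressive multi-fidelity sketch: predict an expensive
# "fine-mesh" simulator from a cheap "coarse-mesh" one plus a fitted
# discrepancy. Both simulators here are assumptions, not real models.
coarse = lambda t: np.sin(5 * t)               # cheap, biased simulator
fine   = lambda t: 1.2 * np.sin(5 * t) + 0.3 * t

x = np.linspace(0.0, 1.0, 6)                   # only 6 fine-mesh runs
A = np.stack([coarse(x), x, np.ones_like(x)], axis=1)
rho, a, b = np.linalg.lstsq(A, fine(x), rcond=None)[0]

pred = lambda t: rho * coarse(t) + a * t + b   # fine-level prediction
```

In the paper the scale and the discrepancy are modeled with Gaussian processes and the simulator output is itself random; the linear fit above only conveys the autoregressive structure being exploited.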
Sequential search based on kriging: convergence analysis of some algorithms
Let F be a set of real-valued functions on a set X and let S : F → G
be an arbitrary mapping. We consider the problem of making inference about
S(f), with f ∈ F unknown, from a finite set of pointwise evaluations of
f. We are mainly interested in the problems of approximation and
optimization. In this article, we give a brief review of results concerning
average error bounds of Bayesian search methods that use a random process
prior about f.
The Informational Approach to Global Optimization in presence of very noisy evaluation results. Application to the optimization of renewable energy integration strategies
We consider the problem of global optimization of a function f from very
noisy evaluations. We adopt a Bayesian sequential approach: evaluation points
are chosen so as to reduce the uncertainty about the position of the global
optimum of f, as measured by the entropy of the corresponding random variable
(Informational Approach to Global Optimization, Villemonteix et al., 2009).
When evaluations are very noisy, the error coming from the estimation of the
entropy using conditional simulations becomes non-negligible compared to its
variations on the input domain. We propose a solution to this problem by
choosing evaluation points as if several evaluations were going to be made at
these points. The method is applied to the optimization of a strategy for the
integration of renewable energies into an electrical distribution network.
An informational approach to the global optimization of expensive-to-evaluate functions
In many global optimization problems motivated by engineering applications,
the number of function evaluations is severely limited by time or cost. To
ensure that each evaluation contributes to the localization of good candidates
for the role of global minimizer, a sequential choice of evaluation points is
usually carried out. In particular, when Kriging is used to interpolate past
evaluations, the uncertainty associated with the lack of information on the
function can be expressed and used to compute a number of criteria accounting
for the interest of an additional evaluation at any given point. This paper
introduces minimizer entropy as a new Kriging-based criterion for the
sequential choice of points at which the function should be evaluated. Based on
\emph{stepwise uncertainty reduction}, it accounts for the informational gain
on the minimizer expected from a new evaluation. The criterion is approximated
using conditional simulations of the Gaussian process model behind Kriging, and
then inserted into an algorithm similar in spirit to the \emph{Efficient Global
Optimization} (EGO) algorithm. An empirical comparison is carried out between
our criterion and \emph{expected improvement}, one of the reference criteria in
the literature. Experimental results indicate major evaluation savings over
EGO. Finally, the method, which we call IAGO (for Informational Approach to
Global Optimization) is extended to robust optimization problems, where both
the factors to be tuned and the function evaluations are corrupted by noise.

Comment: Accepted for publication in the Journal of Global Optimization (this
is the revised version, with additional details on computational problems
and some grammatical changes).
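The minimizer-entropy computation can be sketched as follows: conditional simulations of the GP posterior are drawn on a candidate grid, and the entropy of the empirical distribution of their minimizers is computed. The kernel, grid and test function are assumptions for illustration:

```python
import numpy as np

# Sketch: entropy of the global-minimizer location under a GP
# posterior, estimated from conditional simulations on a grid.
rng = np.random.default_rng(2)
grid = np.linspace(0.0, 1.0, 50)
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 0.2 ** 2)

iobs = np.array([10, 25, 40])                  # indices of evaluated points
yobs = np.sin(8.0 * grid[iobs])                # noise-free evaluations
Ko = K[np.ix_(iobs, iobs)] + 1e-8 * np.eye(3)
Kc = K[:, iobs]
mean = Kc @ np.linalg.solve(Ko, yobs)          # GP posterior mean on the grid
cov = K - Kc @ np.linalg.solve(Ko, Kc.T)       # posterior covariance

sims = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(50),
                               size=500, check_valid='ignore')
mins = np.argmin(sims, axis=1)                 # minimizer of each simulation
p = np.bincount(mins, minlength=50) / len(mins)
entropy = -np.sum(p[p > 0] * np.log(p[p > 0])) # Shannon entropy (nats)
```

A stepwise-uncertainty-reduction strategy then selects, among candidate evaluation points, the one whose evaluation is expected to reduce this entropy the most.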